
    Nodes and Arcs: Concept Map, Semiotics, and Knowledge Organization.

    Purpose – The purpose of the research reported here is to improve comprehension of the socially negotiated identity of concepts in the domain of knowledge organization. Because knowledge organization as a domain has the order of concepts as its focus, from both a theoretical and an applied perspective, it is important to understand how the domain itself understands the meaning of a concept.

    Design/methodology/approach – The paper provides an empirical demonstration of this understanding. Content analysis is employed to demonstrate the ways in which concepts are portrayed in KO concept maps as signs, and the maps are subjected to evaluative semiotic analysis as a way to understand their meaning. The frame was the entire population of formal proceedings in knowledge organization: all proceedings of the International Society for Knowledge Organization's international conferences (1990-2010) and those of the annual classification workshops of the Special Interest Group for Classification Research of the American Society for Information Science and Technology (SIG/CR).

    Findings – A total of 344 concept maps were analyzed. There was no discernible chronological pattern. Most concept maps were created by authors who were professors from the USA, Germany, France, or Canada. Roughly half were judged to contain semiotic content. Peircean semiotics predominated, and tended to convey greater granularity and complexity in conceptual terminology. Nodes could be identified as anchors of conceptual clusters in the domain; the arcs were identifiable as verbal relationship indicators. Saussurean concept maps were more applied than theoretical; Peircean concept maps had more theoretical content.

    Originality/value – The paper demonstrates important empirical evidence about the coherence of the domain of knowledge organization. Core values are conveyed across time through the concept maps in this population of conference papers.
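    To make the method concrete, here is a minimal sketch of the kind of tally such a content analysis produces. The coded records and category labels below are hypothetical illustrations, not the study's actual data or coding scheme.

```python
# Hypothetical illustration of a content-analysis tally over coded concept
# maps; the records and labels below are invented for the example.
from collections import Counter

# Each record: (semiotic tradition judged present, orientation of the map).
coded_maps = [
    ("Peircean", "theoretical"),
    ("Peircean", "theoretical"),
    ("Saussurean", "applied"),
    ("none", "applied"),
    # ... one entry per concept map in the analyzed population
]

by_tradition = Counter(tradition for tradition, _ in coded_maps)
cross_tab = Counter(coded_maps)   # tradition x orientation cross-tabulation

print(by_tradition)   # e.g. Counter({'Peircean': 2, 'Saussurean': 1, 'none': 1})
print(cross_tab)
```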

    Bounds for graph regularity and removal lemmas

    We show, for any positive integer $k$, that there exists a graph in which any equitable partition of its vertices into $k$ parts has at least $ck^2/\log^* k$ pairs of parts which are not $\epsilon$-regular, where $c,\epsilon>0$ are absolute constants. This bound is tight up to the constant $c$ and addresses a question of Gowers on the number of irregular pairs in Szemerédi's regularity lemma. In order to gain some control over irregular pairs, another regularity lemma, known as the strong regularity lemma, was developed by Alon, Fischer, Krivelevich, and Szegedy. For this lemma, we prove a lower bound of wowzer type, which is one level higher in the Ackermann hierarchy than the tower function, on the number of parts in the strong regularity lemma, essentially matching the upper bound. On the other hand, for the induced graph removal lemma, the standard application of the strong regularity lemma, we find a different proof which yields a tower-type bound. We also discuss bounds on several related regularity lemmas, including the weak regularity lemma of Frieze and Kannan and the recently established regular approximation theorem. In particular, we show that a weak partition with approximation parameter $\epsilon$ may require as many as $2^{\Omega(\epsilon^{-2})}$ parts. This is tight up to the implied constant and solves a problem studied by Lovász and Szegedy.
    Comment: 62 pages
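    For reference, "one level higher in the Ackermann hierarchy" can be made concrete. The tower and wowzer functions are commonly defined as follows (a standard formulation; the notation here is ours, not taken from the paper):

```latex
% Tower function: iterated exponentiation.
\[
  T(1) = 2, \qquad T(n+1) = 2^{T(n)} .
\]
% Wowzer function: iterated tower, one level up in the Ackermann hierarchy.
\[
  W(1) = 2, \qquad W(n+1) = T\bigl(W(n)\bigr) .
\]
% For scale: T(4) = 2^{2^{2^2}} = 65536, W(3) = T(T(2)) = T(4) = 65536,
% and W(4) = T(65536) is already a tower of 2s of height 65536.
```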

    L-selectin mediated leukocyte tethering in shear flow is controlled by multiple contacts and cytoskeletal anchorage facilitating fast rebinding events

    L-selectin mediated tethers result in leukocyte rolling only above a threshold in shear. Here we present biophysical modeling based on recently published data from flow chamber experiments (Dwir et al., J. Cell Biol. 163: 649-659, 2003) which supports the interpretation that L-selectin mediated tethers below the shear threshold correspond to single L-selectin carbohydrate bonds dissociating on the time scale of milliseconds, whereas L-selectin mediated tethers above the shear threshold are stabilized by multiple bonds and fast rebinding of broken bonds, resulting in tether lifetimes on the timescale of $10^{-1}$ seconds. Our calculations for cluster dissociation suggest that the single molecule rebinding rate is of the order of $10^4$ Hz. A similar estimate results if increased tether dissociation for tail-truncated L-selectin mutants above the shear threshold is modeled as diffusive escape of single receptors from the rebinding region due to increased mobility. Using computer simulations, we show that our model yields first order dissociation kinetics and exponential dependence of tether dissociation rates on shear stress. Our results suggest that multiple contacts, cytoskeletal anchorage of L-selectin and local rebinding of ligand play important roles in L-selectin tether stabilization and progression of tethers into persistent rolling on endothelial surfaces.
    Comment: 9 pages, Revtex, 4 Postscript figures included
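    The qualitative effect of rebinding on cluster lifetime can be sketched with a small Gillespie-style simulation. The rate constants below are illustrative placeholders, not the paper's fitted values; the point is how rebinding stretches a millisecond-scale single-bond lifetime toward the longer tether lifetimes described above.

```python
# Minimal Gillespie-style sketch of a small adhesion-bond cluster with
# rebinding; all parameters are illustrative, not the paper's fitted values.
import random

rng = random.Random(0)  # fixed seed for reproducibility

def tether_lifetime(n_bonds=3, k_off=1000.0, k_on=1e4):
    """Simulate one tether: start with n_bonds closed bonds and return the
    time until all bonds are broken (cluster dissociation).

    k_off : per-bond rupture rate in 1/s (~1 ms single-bond lifetime here)
    k_on  : per-open-bond rebinding rate in 1/s (order 10^4 Hz, as estimated)
    """
    closed, t = n_bonds, 0.0
    while closed > 0:
        rate_off = closed * k_off                 # total rupture propensity
        rate_on = (n_bonds - closed) * k_on       # total rebinding propensity
        total = rate_off + rate_on
        t += rng.expovariate(total)               # waiting time to next event
        if rng.random() * total < rate_off:
            closed -= 1                           # one bond ruptures
        else:
            closed += 1                           # one broken bond rebinds
    return t

# A single bond lives ~1 ms; with 3 bonds and fast rebinding the mean
# lifetime moves up toward the 10^-1 s scale.
samples = [tether_lifetime() for _ in range(2000)]
print(sum(samples) / len(samples))   # roughly 5e-2 s with these rates
```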

    Heavy Hitters and the Structure of Local Privacy

    We present a new locally differentially private algorithm for the heavy hitters problem which achieves optimal worst-case error as a function of all standardly considered parameters. Prior work obtained error rates which depend optimally on the number of users, the size of the domain, and the privacy parameter, but depend sub-optimally on the failure probability. We strengthen existing lower bounds on the error to incorporate the failure probability, and show that our new upper bound is tight with respect to this parameter as well. Our lower bound is based on a new understanding of the structure of locally private protocols. We further develop these ideas to obtain the following general results beyond heavy hitters.
    • Advanced Grouposition: in the local model, group privacy for $k$ users degrades proportionally to $\approx \sqrt{k}$, instead of linearly in $k$ as in the central model. Stronger group privacy yields improved max-information guarantees, as well as stronger lower bounds (via "packing arguments"), over the central model.
    • Building on a transformation of Bassily and Smith (STOC 2015), we give a generic transformation from any non-interactive approximate-private local protocol into a pure-private local protocol. Again in contrast with the central model, this shows that we cannot obtain more accurate algorithms by moving from pure to approximate local privacy.
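    For readers new to the local model, the textbook pure-LDP primitive is k-ary randomized response: each user perturbs their own value before it leaves their device, and the analyst debiases the aggregate. This is a minimal sketch of that primitive over a small domain, not the paper's heavy-hitters algorithm; the domain size and epsilon are illustrative.

```python
# k-ary randomized response: a basic pure locally differentially private
# frequency oracle. Illustrative only; not the paper's algorithm.
import math
import random

def randomize(value, domain_size, epsilon, rng):
    """Each user perturbs their own value locally before sending it."""
    p_keep = math.exp(epsilon) / (math.exp(epsilon) + domain_size - 1)
    if rng.random() < p_keep:
        return value
    other = rng.randrange(domain_size - 1)   # uniform over the other values
    return other if other < value else other + 1

def estimate_frequencies(reports, domain_size, epsilon):
    """Unbiased frequency estimates from the randomized reports."""
    n = len(reports)
    p = math.exp(epsilon) / (math.exp(epsilon) + domain_size - 1)
    q = (1 - p) / (domain_size - 1)          # chance of any specific other value
    counts = [0] * domain_size
    for r in reports:
        counts[r] += 1
    return [(c / n - q) / (p - q) for c in counts]

rng = random.Random(0)
true_data = [0] * 6000 + [1] * 3000 + [2] * 1000      # a skewed distribution
reports = [randomize(v, 3, 1.0, rng) for v in true_data]
print(estimate_frequencies(reports, 3, 1.0))          # roughly [0.6, 0.3, 0.1]
```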

    Compressibility and probabilistic proofs

    We consider several examples of probabilistic existence proofs using compressibility arguments, including some results that involve the Lovász local lemma.
    Comment: Invited talk for CiE 2017 (full version)
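    As a one-line reminder of the style of argument, here is the standard counting fact that powers compressibility proofs (a classical fact, not a result specific to this paper):

```latex
% The counting fact behind compressibility arguments: most strings
% have no short description.
\[
  \#\{\text{binary strings of length } n\} = 2^{n},
  \qquad
  \#\{\text{descriptions shorter than } n-c\} \le \sum_{i=0}^{n-c-1} 2^{i} = 2^{n-c}-1 .
\]
% So fewer than a 2^{-c} fraction of length-n strings compress by more
% than c bits, and a random string is incompressible (up to c bits) with
% probability greater than 1 - 2^{-c}; an object whose nonexistence would
% make a random string compressible must therefore exist.
```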

    Detecting and Characterizing Small Dense Bipartite-like Subgraphs by the Bipartiteness Ratio Measure

    We study the problem of finding and characterizing subgraphs with small bipartiteness ratio. We give a bicriteria approximation algorithm SwpDB such that if there exists a subset $S$ of volume at most $k$ and bipartiteness ratio $\theta$, then for any $0<\epsilon<1/2$, it finds a set $S'$ of volume at most $2k^{1+\epsilon}$ and bipartiteness ratio at most $4\sqrt{\theta/\epsilon}$. By combining a truncation operation, we give a local algorithm LocDB, which has asymptotically the same approximation guarantee as the algorithm SwpDB on both the volume and bipartiteness ratio of the output set, and runs in time $O(\epsilon^2\theta^{-2}k^{1+\epsilon}\ln^3 k)$, independent of the size of the graph. Finally, we give a spectral characterization of the small dense bipartite-like subgraphs by using the $k$th largest eigenvalue of the Laplacian of the graph.
    Comment: 17 pages; ISAAC 201
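    To make the quantity being approximated concrete, here is a sketch that evaluates the bipartiteness ratio of a candidate set under the definition standard in this line of work (following Trevisan): for a set $S$ split into sides $(L,R)$, $\beta = (2e(L) + 2e(R) + e(S, V \setminus S))/\mathrm{vol}(S)$. The paper may normalize slightly differently, and the example graph is invented for illustration.

```python
# Evaluate the bipartiteness ratio of a candidate set S = L u R:
# beta = (2*e(L) + 2*e(R) + e(S, V\S)) / vol(S).  Small beta means S is
# dense, nearly bipartite, and weakly connected to the rest of the graph.

def bipartiteness_ratio(adj, left, right):
    """adj: dict vertex -> set of neighbours (undirected graph)."""
    S = left | right
    vol = sum(len(adj[v]) for v in S)     # sum of degrees over S
    bad = 0
    for v in S:
        side = left if v in left else right
        for u in adj[v]:
            if u not in S:
                bad += 1    # boundary edge, seen once (from its S endpoint)
            elif u in side:
                bad += 1    # same-side edge, seen from both endpoints -> weight 2
    return bad / vol

# A 4-cycle 0-1-2-3 (bipartite) plus one stray edge 3-4.
adj = {
    0: {1, 3},
    1: {0, 2},
    2: {1, 3},
    3: {0, 2, 4},
    4: {3},
}
print(bipartiteness_ratio(adj, left={0, 2}, right={1, 3}))  # 1/9 ~ 0.111
```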

    Efficiently decoding Reed-Muller codes from random errors

    Reed-Muller codes encode an $m$-variate polynomial of degree $r$ by evaluating it on all points in $\{0,1\}^m$. We denote this code by $RM(m,r)$. The minimal distance of $RM(m,r)$ is $2^{m-r}$ and so it cannot correct more than half that number of errors in the worst case. For random errors one may hope for a better result. In this work we give an efficient algorithm (in the block length $n=2^m$) for decoding random errors in Reed-Muller codes far beyond the minimal distance. Specifically, for low rate codes (of degree $r=o(\sqrt{m})$) we can correct a random set of $(1/2-o(1))n$ errors with high probability. For high rate codes (of degree $m-r$ for $r=o(\sqrt{m/\log m})$), we can correct roughly $m^{r/2}$ errors. More generally, for any integer $r$, our algorithm can correct any error pattern in $RM(m,m-(2r+2))$ for which the same erasure pattern can be corrected in $RM(m,m-(r+1))$. The results above are obtained by applying recent results of Abbe, Shpilka and Wigderson (STOC, 2015), Kumar and Pfister (2015) and Kudekar et al. (2015) regarding the ability of Reed-Muller codes to correct random erasures. The algorithm is based on solving a carefully defined set of linear equations and thus it is significantly different from other algorithms for decoding Reed-Muller codes that are based on the recursive structure of the code. It can be seen as a more explicit proof of a result of Abbe et al. that shows a reduction from correcting erasures to correcting errors, and it also bears some similarities with the famous Berlekamp-Welch algorithm for decoding Reed-Solomon codes.
    Comment: 18 pages, 2 figures
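    To make "solving a set of linear equations" concrete, here is a sketch that builds an $RM(m,r)$ generator matrix over GF(2) and decodes erasures (the easier subproblem that the error-decoding results reduce to) by plain Gaussian elimination. This is illustrative, not the paper's algorithm, and the parameters and erasure pattern are small invented choices.

```python
# Build an RM(m, r) generator matrix over GF(2) and decode erasures by
# solving a linear system. Illustrative sketch, not the paper's algorithm.
from itertools import combinations, product

def rm_generator(m, r):
    """Rows = evaluations of all monomials of degree <= r on {0,1}^m."""
    points = list(product([0, 1], repeat=m))
    rows = []
    for d in range(r + 1):
        for mono in combinations(range(m), d):
            rows.append([int(all(p[i] for i in mono)) for p in points])
    return rows  # k x n matrix over GF(2), with n = 2^m

def solve_gf2(A, b):
    """Solve A x = b over GF(2) by Gauss-Jordan elimination
    (assumes a consistent system with full column rank)."""
    n_rows, n_cols = len(A), len(A[0])
    M = [A[i][:] + [b[i]] for i in range(n_rows)]   # augmented matrix
    pivot_row, pivots = 0, []
    for c in range(n_cols):
        pr = next((i for i in range(pivot_row, n_rows) if M[i][c]), None)
        if pr is None:
            continue
        M[pivot_row], M[pr] = M[pr], M[pivot_row]
        for i in range(n_rows):
            if i != pivot_row and M[i][c]:
                M[i] = [x ^ y for x, y in zip(M[i], M[pivot_row])]
        pivots.append(c)
        pivot_row += 1
    x = [0] * n_cols
    for i, c in enumerate(pivots):
        x[c] = M[i][n_cols]
    return x

m, r = 4, 1                        # RM(4, 1): n = 16, k = 5, distance 8
G = rm_generator(m, r)
message = [1, 0, 1, 1, 0]
codeword = [sum(mi * gi for mi, gi in zip(message, col)) % 2
            for col in zip(*G)]

erased = {0, 3, 7, 11, 13, 14}     # 6 positions lost (< distance - 1 = 7)
kept = [j for j in range(len(codeword)) if j not in erased]
A = [[G[i][j] for i in range(len(G))] for j in kept]   # surviving columns
b = [codeword[j] for j in kept]
print(solve_gf2(A, b) == message)  # True: kept columns still have full rank
```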